1,497 research outputs found

    Modelling bonds and credit default swaps using a structural model with contagion

    This paper develops a two-dimensional structural framework for valuing credit default swaps and corporate bonds in the presence of default contagion. Modelling the values of related firms as correlated geometric Brownian motions with exponential default barriers yields analytical formulae for both credit default swap spreads and corporate bond yields. The credit dependence structure is shaped both by a longer-term correlation structure and by the possibility of default contagion. In this way, the model can generate a diverse range of shapes for the term structure of credit spreads using realistic input parameter values.
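The paper derives closed-form expressions; as a purely illustrative sketch (all parameter values and the discrete barrier-monitoring scheme are invented here, not taken from the paper), the first-passage default mechanism for two correlated firms can be checked by Monte Carlo:

```python
import numpy as np

rng = np.random.default_rng(0)
T, n_steps, n_paths = 5.0, 500, 10_000
dt = T / n_steps
mu, sigma, rho = 0.02, 0.25, 0.5          # drift, volatility, asset correlation
V0, K0, g = 100.0, 60.0, 0.02             # initial value; barrier K(t) = K0*exp(g*t)

# Correlated Brownian increments for the two firms
z1 = rng.standard_normal((n_paths, n_steps))
z2 = rho * z1 + np.sqrt(1.0 - rho**2) * rng.standard_normal((n_paths, n_steps))

t = np.arange(1, n_steps + 1) * dt
log_barrier = np.log(K0) + g * t

def defaulted(z):
    # Log firm value along each discretised GBM path
    logV = np.log(V0) + np.cumsum((mu - 0.5 * sigma**2) * dt
                                  + sigma * np.sqrt(dt) * z, axis=1)
    return (logV <= log_barrier).any(axis=1)   # barrier crossed on the grid

d1, d2 = defaulted(z1), defaulted(z2)
p1, p2, p_joint = d1.mean(), d2.mean(), (d1 & d2).mean()
```

With positively correlated asset values, the joint default probability `p_joint` exceeds the product `p1 * p2`; this excess dependence is what the model's correlation and contagion channels control.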

    Public Utilities Law

    This article explains, at a high level, some of the major changes to electric regulation in Virginia in recent years. It also discusses how the General Assembly's new policies have affected retail electric rates and the development of new generation facilities, including renewable energy resources, in the Commonwealth since 1999.

    Evaluating Terrain for Harvesting Equipment Selection

    A terrain evaluation model, utilizing a geographic information system, has been developed as a tool for planning large-scale industrial timber harvesting operations. The model combines terrain descriptions with machine operating criteria to produce maps delineating operable areas. The integration of the model with a harvest planning decision support system is discussed and an example is presented.
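The overlay logic can be sketched with a toy raster example (the criteria and cell values below are hypothetical, chosen only to illustrate combining terrain layers with machine operating limits):

```python
import numpy as np

# Hypothetical terrain rasters: percent slope and a soil-strength class
slope = np.array([[12, 35,  8],
                  [40, 22, 55],
                  [ 5, 18, 30]])
soil = np.array([[2, 1, 3],
                 [1, 2, 1],
                 [3, 3, 2]])      # 1 = weak .. 3 = strong

# Hypothetical operating criteria for a ground-based machine
MAX_SLOPE = 30                    # percent
MIN_SOIL = 2                      # minimum trafficable soil class

# Cells meeting every criterion are delineated as operable
operable = (slope <= MAX_SLOPE) & (soil >= MIN_SOIL)
```

In a real GIS workflow, each raster would come from a terrain database and the resulting mask would be mapped back to harvest-planning units.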

    Research Notes: A technique for evaluating nodulation response of soybean genotypes with specific Rhizobium strains

    Previous research on the interactions of Rhizobium strains with host cultivars has involved the testing of small numbers of plants with a specific strain of Rhizobium in Leonard jar assemblies (Leonard, 1943). The Leonard jar technique in our greenhouse requires frequent watering (up to twice daily) and periodic adjustment of the wick element. We have found the traditional Leonard jar assembly technique inadequate to efficiently accommodate the large plant populations required in plant selection and genetic studies of allelism and linkage.

    Epitaxy of Fe3O4 on Si(001) by pulsed laser deposition using a TiN/MgO buffer layer

    Epitaxy of oxide materials on silicon (Si) substrates is of great interest for future functional devices exploiting the large variety of physical properties of the oxides, such as ferroelectricity, ferromagnetism, or superconductivity. Recently, materials with high spin polarization of the charge carriers have become interesting for semiconductor-oxide hybrid devices in spin electronics. Here, we report on pulsed laser deposition of magnetite (Fe3O4) on Si(001) substrates cleaned by an in situ laser beam high-temperature treatment. After depositing a double buffer layer of titanium nitride (TiN) and magnesium oxide (MgO), a high-quality epitaxial magnetite layer can be grown, as verified by RHEED intensity oscillations and high-resolution x-ray diffraction.

    On the relative intensity of Poisson’s spot

    The Fresnel diffraction phenomenon referred to as Poisson’s spot or the spot of Arago has, besides its historical significance, become relevant in a number of fields. Examples include fundamental tests of the superposition principle in the transition from quantum to classical physics and the search for extra-solar planets using star shades. Poisson’s spot refers to the positive on-axis wave interference in the shadow of any spherical or circular obstacle. While the spot’s intensity equals that of the undisturbed field in the plane-wave picture, in general it depends on a number of factors: the size and wavelength of the source, the size and surface corrugation of the diffracting obstacle, and the distances between source, obstacle and detector. The intensity can be calculated by solving the Fresnel–Kirchhoff diffraction integral numerically, which, however, tends to be computationally expensive. We have therefore devised an analytical model for the on-axis intensity of Poisson’s spot relative to the intensity of the undisturbed wave field and successfully validated it both with a simple light-diffraction setup and with numerical methods. The model will be useful for optimizing future Poisson-spot matter-wave diffraction experiments and for determining under what experimental conditions the spot can be observed.
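For the plane-wave case mentioned in the abstract, the on-axis Fresnel integral can be evaluated directly. The sketch below (wavelength, distance and disc radius are arbitrary example values, not taken from the paper) recovers the classical result that the spot intensity equals that of the undisturbed field:

```python
import numpy as np

lam = 500e-9        # wavelength (m), example value
z = 1.0             # disc-to-detector distance (m)
R = 1e-3            # disc radius (m)
k = 2 * np.pi / lam

# Integrate outward from the disc edge; a smooth Gaussian taper suppresses
# the artificial boundary introduced by truncating the radial domain.
r = np.linspace(R, 9e-3, 200_000)
taper = np.exp(-(np.maximum(r - 4.5e-3, 0.0) / 1.5e-3) ** 2)

f = np.exp(1j * k * r**2 / (2 * z)) * taper * r
U = (k / (1j * z)) * np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(r))  # trapezoid rule
I_rel = abs(U) ** 2   # on-axis intensity relative to the undisturbed plane wave
```

`I_rel` comes out close to 1, the plane-wave value; the paper's analytical model describes how finite source size, obstacle corrugation and geometry modify this.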

    Projected changes in the Asian-Australian monsoon region in 1.5°C and 2.0°C global-warming scenarios

    In light of the Paris Agreement, it is essential to identify regional impacts of half a degree additional global warming to inform climate adaptation and mitigation strategies. We investigate the effects of 1.5°C and 2.0°C global warming above pre-industrial conditions, relative to present day (2006-2015), over the Asian-Australian monsoon region (AAMR) using five models from the Half a degree Additional warming, Prognosis and Projected Impacts (HAPPI) project. There is considerable inter-model variability in projected changes to mean climate and extreme events in 2.0°C and 1.5°C scenarios. There is high confidence in projected increases to mean and extreme surface temperatures over AAMR, as well as more-frequent persistent daily temperature extremes over East Asia, Australia and northern India with an additional 0.5°C warming, which are likely to occur. Mean and extreme monsoon precipitation amplify over AAMR, except over Australia at 1.5°C where there is uncertainty in the sign of the change. Persistent daily extreme precipitation events are likely to become more frequent over parts of East Asia and India with an additional 0.5°C warming. There is lower confidence in projections of precipitation change than in projections of surface temperature change. These results highlight the benefits of limiting the global-mean temperature change to 1.5°C above pre-industrial, as the severity of the above effects increases with an extra 0.5°C warming.

    Historical tsunami observability for Izu–Bonin–Mariana sources

    The Izu–Bonin–Mariana Subduction System (IBM) is one of the longest subduction zones in the world with no instrumental history of shallow-focus, great earthquakes (Mw > 8). Over the last 50 years, researchers have speculated on the reason for the absence of large magnitude, shallow seismicity on this plate interface, exploring factors from plate age to convergence rate. We approach the question from a different point of view: what if the IBM has hosted great earthquakes and no documentable evidence was left? To address the question of observability, we model expected tsunami wave heights from nine great earthquake scenarios on the IBM at selected locations around the Pacific Basin with an emphasis on locations having the possibility for a long, written record. Many circum-Pacific locations have extensive written records of tsunami run-up, with some locations in Japan noting tsunami back to 684 CE. We find that most IBM source models should theoretically be observable at historically inhabited locations in the Pacific Basin. Surprisingly, however, some IBM source models for earthquakes with magnitudes as high as Mw 8.7 produce tsunami wave heights that would be essentially unobservable at most historically populated Pacific Basin locations. These scenarios aim to provide a constraint on the upper bound for earthquake magnitudes in the IBM over at least the past 400 years.

    Efficient cosmological parameter sampling using sparse grids

    We present a novel method to significantly speed up cosmological parameter sampling. The method relies on constructing an interpolation of the CMB log-likelihood based on sparse grids, which is used as a shortcut for the likelihood evaluation. We obtain excellent results over a large region in parameter space, comprising about 25 log-likelihoods around the peak, and we reproduce the one-dimensional projections of the likelihood almost perfectly. In speed and accuracy, our technique is competitive with existing approaches to accelerate parameter estimation based on polynomial interpolation or neural networks, while having some advantages over them. In our method, there is no danger of creating unphysical wiggles, as can be the case for polynomial fits of high degree. Furthermore, we do not require a long training time as for neural networks; instead, the construction of the interpolation is determined by the time it takes to evaluate the likelihood at the sampling points, which can be parallelised to an arbitrary degree. Our approach is completely general, and it can adaptively exploit the properties of the underlying function. We can thus apply it to any problem where an accurate interpolation of a function is needed.
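The hierarchical-surplus construction that sparse grids build on can be illustrated in one dimension (a toy sketch, not the authors' code; real sparse grids combine such hierarchical bases across many dimensions):

```python
def hat(x, level, index):
    # Hat basis function on [0, 1]: mesh width 2**-level, centred at index * 2**-level
    h = 2.0 ** -level
    return max(0.0, 1.0 - abs(x / h - index))

def hierarchize(f, max_level):
    # Surplus = function value minus the interpolant built from coarser levels
    surpluses = {}
    for level in range(1, max_level + 1):
        for index in range(1, 2 ** level, 2):     # odd indices are the new points
            x = index * 2.0 ** -level
            coarse = sum(v * hat(x, l, i) for (l, i), v in surpluses.items())
            surpluses[(level, index)] = f(x) - coarse
    return surpluses

def interpolate(surpluses, x):
    # Evaluate the hierarchical interpolant at x
    return sum(v * hat(x, l, i) for (l, i), v in surpluses.items())
```

For a smooth function vanishing on the boundary, e.g. f(x) = x(1 - x), the interpolant reproduces f to O(h²) between grid points; interpolating the log-likelihood rather than the likelihood itself, as the paper does, keeps the interpolated function well behaved around the peak.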